online learner
- Asia > Middle East > Jordan (0.04)
- North America > United States > Massachusetts > Suffolk County > Boston (0.04)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- Asia > China > Guangxi Province > Nanning (0.04)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (0.93)
- Information Technology > Artificial Intelligence > Vision (0.93)
- Information Technology > Artificial Intelligence > Natural Language (0.68)
- North America > United States > California > Santa Cruz County > Santa Cruz (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- (4 more...)
- Education > Educational Setting > Online (0.54)
- Information Technology (0.46)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.14)
- North America > United States > District of Columbia > Washington (0.04)
- Europe > Austria > Styria > Graz (0.04)
- (7 more...)
A Categorizing Popular Ranking Losses

Table 1: Categorizing popular ranking losses. (Columns: Loss, Loss Family, Sum Loss@p; the table body is not recoverable from this excerpt.)
We summarize the results in Table 1. In the ranking literature, many evaluation metrics are stated in terms of gain functions, with relevance scores often restricted to be binary.

In this section, we prove Theorem 4.2, which characterizes the agnostic PAC learnability of an arbitrary hypothesis class H. Before we do so, we need some more notation regarding F. We begin with Lemma C.2, which asserts that ERM is an agnostic PAC learner for H w.r.t. ℓ. The proof of Lemma C.2 is similar to the proof of Lemma 4.3 and involves bounding the empirical error; by Proposition C.1, this implies the claim. Next, Lemma C.3 extends the learnability result; its proof follows the exact same strategy used in proving Lemma 4.4.
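The gain-function view of ranking metrics mentioned above can be illustrated with a small sketch. The function name and the specific gain 2^r − 1 are illustrative assumptions, not definitions taken from this excerpt:

```python
import math

def dcg_at_p(relevances, p, gain=lambda r: 2 ** r - 1):
    """DCG@p written via a gain function, as in the gain-based metric view."""
    return sum(gain(r) / math.log2(i + 2)
               for i, r in enumerate(relevances[:p]))

# With binary relevances, 2^r - 1 reduces to r itself, so DCG@p is just a
# discounted count of the hits in the top p positions.
print(dcg_at_p([1, 0, 1, 0], p=4))   # 1/log2(2) + 1/log2(4) = 1.5
```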
Attacks on Online Learners: a Teacher-Student Analysis
Machine learning models are famously vulnerable to adversarial attacks: small ad-hoc perturbations of the data that can catastrophically alter the model predictions. While a large literature has studied test-time attacks on pre-trained models, the important case of attacks in an online learning setting has received little attention so far. In this work, we use a control-theoretical perspective to study the scenario where an attacker may perturb data labels to manipulate the learning dynamics of an online learner. We perform a theoretical analysis of the problem in a teacher-student setup, considering different attack strategies, and obtain analytical results for the steady state of simple linear learners. These results enable us to prove that a discontinuous transition in the learner's accuracy occurs when the attack strength exceeds a critical threshold. We then empirically study attacks on learners with complex architectures using real data, confirming the insights of our theoretical analysis. Our findings show that greedy attacks can be extremely efficient, especially when data arrive in small batches.
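As a hedged illustration of the setting described in this abstract, the sketch below simulates a greedy label-perturbation attack on a linear SGD student learning from a teacher. The attacker's target model, the perturbation budget, and the step size are assumptions for the sketch, not the paper's exact setup:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10
w_teacher = rng.normal(size=d)
w_teacher /= np.linalg.norm(w_teacher)   # ground-truth teacher
w_target = -w_teacher                    # attacker's target model (assumed)
w = np.zeros(d)                          # online student
lr, budget = 0.05, 2.0                   # attack strength = max label shift

for t in range(5000):
    x = rng.normal(size=d)
    y_clean = w_teacher @ x
    # Greedy attacker: move the label toward the target model's prediction,
    # clipped to the allowed perturbation budget.
    delta = np.clip(w_target @ x - y_clean, -budget, budget)
    y = y_clean + delta
    # Student takes one SGD step on the squared loss with the poisoned label.
    w -= lr * (w @ x - y) * x

# Negative alignment means the attack steered the student away from the teacher.
print(float(w @ w_teacher))
```

With a budget this large relative to the teacher's outputs, the poisoned labels mostly agree with the target model, so the student's steady-state alignment with the teacher turns negative.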
- Information Technology > Security & Privacy (0.60)
- Government > Military (0.60)
- North America > United States > Michigan > Washtenaw County > Ann Arbor (0.14)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- North America > United States > California > Santa Cruz County > Santa Cruz (0.04)
- Europe > Hungary > Budapest > Budapest (0.04)
Private Online Learning against an Adaptive Adversary: Realizable and Agnostic Settings
We revisit the problem of private online learning, in which a learner receives a sequence of $T$ data points and must output a hypothesis at each time-step. It is required that the entire stream of output hypotheses satisfy differential privacy. Prior work of Golowich and Livni [2021] established that every concept class $\mathcal{H}$ with finite Littlestone dimension $d$ is privately online learnable in the realizable setting. In particular, they proposed an algorithm that achieves an $O_{d}(\log T)$ mistake bound against an oblivious adversary. However, their approach yields a suboptimal $\tilde{O}_{d}(\sqrt{T})$ bound against an adaptive adversary. In this work, we present a new algorithm with a mistake bound of $O_{d}(\log T)$ against an adaptive adversary, closing this gap. We further investigate the problem in the agnostic setting, which is more general than the realizable setting as it imposes no assumptions on the data. We give an algorithm that obtains a sublinear regret of $\tilde{O}_d(\sqrt{T})$ for generic Littlestone classes, demonstrating that they are also privately online learnable in the agnostic setting.
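For background on mistake bounds in (non-private) realizable online learning, here is a sketch of the classical halving algorithm over a toy finite class. This is illustrative context only, not the private algorithm from the abstract; the threshold class and data sequence are assumptions for the sketch:

```python
import math

# Toy finite class: thresholds h_k(x) = 1[x >= k] on the domain {0, ..., 10}.
H = [lambda x, k=k: int(x >= k) for k in range(11)]
target = H[6]          # realizable: labels come from some hypothesis in H

version_space = list(H)
mistakes = 0
for x in [3, 8, 6, 5, 9, 0, 7, 2, 1, 4]:
    # Halving: predict the majority vote of the still-consistent hypotheses.
    votes = sum(h(x) for h in version_space)
    pred = int(2 * votes >= len(version_space))
    y = target(x)
    mistakes += int(pred != y)
    # Keep only hypotheses consistent with the revealed label; every mistake
    # at least halves the version space, giving at most log2|H| mistakes.
    version_space = [h for h in version_space if h(x) == y]

print(mistakes, "mistakes; bound:", math.floor(math.log2(len(H) or 1)))
```

The $O_d(\log T)$ private bounds above are far more delicate: the learner must also keep its entire hypothesis stream differentially private, which the halving algorithm does not attempt.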
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Asia > China > Guangdong Province > Guangzhou (0.04)
- Information Technology > Security & Privacy (1.00)
- Education > Educational Setting > Online (0.85)